Large language models (LLMs) pre-trained on code, such as OpenAI Codex, automate significant aspects of coding by generating natural code from informal natural-language (NL) intent. However, the generated code carries no correctness guarantee with respect to the user's intent; indeed, it is hard even to define a notion of correctness, since natural language can be ambiguous and lacks formal semantics. In this paper, we take a first step toward addressing this problem by proposing the workflow of test-driven user-intent formalization (TDUIF), which leverages lightweight user feedback to jointly (a) formalize the user's intent as tests (a partial specification) and (b) generate code that meets that formal intent. To evaluate the algorithms automatically and at scale without a user in the loop, we describe how to simulate user interaction with high fidelity using a reference solution. We also describe and implement alternative realizations of several algorithmic components (including mutating and ranking a set of tests) that can be composed into efficient solutions to the TDUIF problem. We have developed a system, TiCoder, that implements several solutions to TDUIF, and we compare their relative effectiveness on the MBPP academic code-generation benchmark. Results using the OpenAI Codex LLM on MBPP are promising: our best algorithm improves the pass@1 code-generation accuracy metric from 48.39% to 55.48% with a single user query, and to as much as 85.48% with up to five user queries. Moreover, for 90.40% of the examples in this dataset, we can generate a non-trivial functional unit test consistent with the user's intent within an average of 1.69 user queries.
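A minimal sketch of a TDUIF-style interaction loop is given below. It is illustrative only: `generate_code`, `generate_tests`, `user_approves`, and `run_test` are hypothetical placeholders standing in for an LLM code/test sampler, a (possibly simulated) user oracle, and a test runner, not TiCoder's actual interfaces.

```python
# Illustrative TDUIF-style loop; all callables are hypothetical placeholders,
# not the paper's actual TiCoder implementation.
from typing import Callable, List

def tduif_loop(prompt: str,
               generate_code: Callable[[str, int], List[str]],
               generate_tests: Callable[[str, int], List[str]],
               user_approves: Callable[[str], bool],
               run_test: Callable[[str, str], bool],
               max_queries: int = 5) -> List[str]:
    """Return code candidates ranked by consistency with user-approved tests."""
    candidates = generate_code(prompt, 10)   # sample N code suggestions from the LLM
    tests = generate_tests(prompt, 20)       # sample candidate unit tests from the LLM
    approved: List[str] = []

    for test in tests[:max_queries]:         # each displayed test costs one user query
        if user_approves(test):              # user accepts the test as capturing intent
            approved.append(test)
            # discard code suggestions that fail an approved (intended) test
            candidates = [c for c in candidates if run_test(c, test)]

    # rank remaining candidates by how many of the generated tests they pass
    candidates.sort(key=lambda c: sum(run_test(c, t) for t in tests), reverse=True)
    return candidates
```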
In nonparametric independence testing, we observe i.i.d.\ data $\{(X_i,Y_i)\}_{i=1}^n$, where $X \in \mathcal{X}, Y \in \mathcal{Y}$ lie in any general spaces, and we wish to test the null that $X$ is independent of $Y$. Modern test statistics such as the kernel Hilbert-Schmidt Independence Criterion (HSIC) and Distance Covariance (dCov) have intractable null distributions due to the degeneracy of the underlying U-statistics. Thus, in practice, one often resorts to using permutation testing, which provides a nonasymptotic guarantee at the expense of recalculating the quadratic-time statistics (say) a few hundred times. This paper provides a simple but nontrivial modification of HSIC and dCov (called xHSIC and xdCov, pronounced ``cross'' HSIC/dCov) so that they have a limiting Gaussian distribution under the null, and thus do not require permutations. This requires building on the newly developed theory of cross U-statistics by Kim and Ramdas (2020), and in particular developing several nontrivial extensions of the theory in Shekhar et al. (2022), which developed an analogous permutation-free kernel two-sample test. We show that our new tests, like the originals, are consistent against fixed alternatives, and minimax rate optimal against smooth local alternatives. Numerical simulations demonstrate that compared to the full dCov or HSIC, our variants have the same power up to a $\sqrt 2$ factor, giving practitioners a new option for large problems or data-analysis pipelines where computation, not sample size, could be the bottleneck.
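For context, the quadratic-time permutation-based HSIC test that xHSIC is designed to sidestep can be sketched as follows; Gaussian kernels with a fixed bandwidth are an illustrative choice, and every permutation reuses the O(n^2) kernel matrices but still requires an O(n^2) recomputation of the statistic.

```python
# Baseline permutation HSIC test (the procedure the cross-statistics avoid);
# kernel and bandwidth choices here are illustrative only.
import numpy as np

def gram(Z: np.ndarray, bandwidth: float = 1.0) -> np.ndarray:
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def hsic(K: np.ndarray, L: np.ndarray) -> float:
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    return float(np.trace(K @ H @ L @ H)) / n**2   # biased HSIC estimator

def permutation_hsic_test(X, Y, num_perms=200, alpha=0.05, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    K, L = gram(X), gram(Y)
    observed = hsic(K, L)
    null_stats = []
    for _ in range(num_perms):                     # each permutation costs quadratic time
        idx = rng.permutation(len(Y))
        null_stats.append(hsic(K, L[np.ix_(idx, idx)]))
    p_value = (1 + sum(s >= observed for s in null_stats)) / (1 + num_perms)
    return observed, p_value, p_value <= alpha
```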
Language models are widely deployed to provide automatic text completion services in user products. However, recent research has revealed that language models (especially large ones) bear considerable risk of memorizing private training data, which is then vulnerable to leakage and extraction by adversaries. In this study, we test the efficacy of a range of privacy-preserving techniques to mitigate unintended memorization of sensitive user text, while varying other factors such as model size and adversarial conditions. We test both "heuristic" mitigations (those without formal privacy guarantees) and Differentially Private training, which provides provable levels of privacy at the cost of some model performance. Our experiments show that, with the exception of L2 regularization, heuristic mitigations are largely ineffective in preventing memorization in our test suite, possibly because they make overly strong assumptions about the characteristics that define "sensitive" or "private" text. In contrast, Differential Privacy reliably prevents memorization in our experiments, despite its computational and model-performance costs.
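As a reminder of the mechanism behind differentially private training in general (not this paper's exact setup), a single DP-SGD step clips each per-example gradient and adds Gaussian noise before averaging; the sketch below uses NumPy, and the hyperparameter values are invented.

```python
# Generic DP-SGD step: per-example clipping plus Gaussian noise.
# Hyperparameters are illustrative, not taken from the study.
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # bound each example's influence
    batch = len(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch        # noisy average gradient
    return params - lr * noisy_mean
```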
Independence testing is a fundamental and classical statistical problem that has been extensively studied in the batch setting when one fixes the sample size before collecting data. However, practitioners often prefer procedures that adapt to the complexity of a problem at hand instead of setting sample size in advance. Ideally, such procedures should (a) allow stopping earlier on easy tasks (and later on harder tasks), hence making better use of available resources, and (b) continuously monitor the data and efficiently incorporate statistical evidence after collecting new data, while controlling the false alarm rate. It is well known that classical batch tests are not tailored for streaming data settings, since valid inference after data peeking requires correcting for multiple testing, but such corrections generally result in low power. In this paper, we design sequential kernelized independence tests (SKITs) that overcome such shortcomings based on the principle of testing by betting. We exemplify our broad framework using bets inspired by kernelized dependence measures such as the Hilbert-Schmidt independence criterion (HSIC) and the constrained-covariance criterion (COCO). Importantly, we also generalize the framework to non-i.i.d. time-varying settings, for which there exist no batch tests. We demonstrate the power of our approaches on both simulated and real data.
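The testing-by-betting principle behind such sequential tests can be sketched generically: a wealth process is multiplied by a bet on each new observation, and Ville's inequality justifies rejecting once the wealth reaches 1/alpha. The payoff function below is a stand-in, not the paper's HSIC- or COCO-based bet.

```python
# Generic testing-by-betting skeleton; the payoff is a placeholder, not SKIT's bet.
from typing import Callable, Iterable, Tuple

def sequential_betting_test(pairs: Iterable[Tuple[float, float]],
                            payoff: Callable[[float, float], float],
                            bet_fraction: float = 0.5,
                            alpha: float = 0.05) -> bool:
    """Process (x, y) pairs one at a time; return True if the null is rejected."""
    wealth = 1.0
    for x, y in pairs:
        g = payoff(x, y)                   # assumed E[g] = 0 under independence and g >= -1
        wealth *= 1.0 + bet_fraction * g   # multiplicative bet keeps wealth a nonnegative martingale
        if wealth >= 1.0 / alpha:          # Ville's inequality: P(wealth ever exceeds 1/alpha) <= alpha
            return True                    # reject the null and stop early
    return False
```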
The kernel Maximum Mean Discrepancy~(MMD) is a popular multivariate distance metric between distributions that has found utility in two-sample testing. The usual kernel-MMD test statistic is a degenerate U-statistic under the null, and thus it has an intractable limiting distribution. Hence, to design a level-$\alpha$ test, one usually selects the rejection threshold as the $(1-\alpha)$-quantile of the permutation distribution. The resulting nonparametric test has finite-sample validity but suffers from large computational cost, since every permutation takes quadratic time. We propose the cross-MMD, a new quadratic-time MMD test statistic based on sample-splitting and studentization. We prove that under mild assumptions, the cross-MMD has a limiting standard Gaussian distribution under the null. Importantly, we also show that the resulting test is consistent against any fixed alternative, and when using the Gaussian kernel, it has minimax rate-optimal power against local alternatives. For large sample sizes, our new cross-MMD provides a significant speedup over the MMD, for only a slight loss in power.
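A sketch of the sample-splitting-plus-studentization idea is shown below; the kernel choice, pairing of the first halves, and other details are simplified relative to the paper.

```python
# Sketch of a cross-MMD-style statistic: score first-half points against the
# held-out halves, then studentize the paired differences and compare with a
# standard normal quantile. Simplified relative to the paper.
import numpy as np
from scipy.stats import norm

def rbf(A, B, bw=1.0):
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * bw ** 2))

def cross_mmd_test(X, Y, alpha=0.05):
    n, m = len(X) // 2, len(Y) // 2
    X1, X2, Y1, Y2 = X[:n], X[n:], Y[:m], Y[m:]

    def score(Z):
        # witness-style score of points Z against the second halves
        return rbf(Z, X2).mean(axis=1) - rbf(Z, Y2).mean(axis=1)

    k = min(n, m)
    u = score(X1[:k]) - score(Y1[:k])                       # paired differences
    stat = np.sqrt(k) * u.mean() / (u.std(ddof=1) + 1e-12)  # studentized mean
    return stat, stat > norm.ppf(1 - alpha)                 # one-sided Gaussian threshold
```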
Finding an initial noise vector that produces an input image when fed into the diffusion process (known as inversion) is an important problem in denoising diffusion models (DDMs), with applications for real image editing. The state-of-the-art approach for real image editing with inversion uses denoising diffusion implicit models (DDIMs) to deterministically noise the image to the intermediate state along the path that the denoising would follow given the original conditioning. However, DDIM inversion for real images is unstable as it relies on local linearization assumptions, which result in the propagation of errors, leading to incorrect image reconstruction and loss of content. To alleviate these problems, we propose Exact Diffusion Inversion via Coupled Transformations (EDICT), an inversion method that draws inspiration from affine coupling layers. EDICT enables mathematically exact inversion of real and model-generated images by maintaining two coupled noise vectors which are used to invert each other in an alternating fashion. Using Stable Diffusion, a state-of-the-art latent diffusion model, we demonstrate that EDICT successfully reconstructs real images with high fidelity. On complex image datasets like MS-COCO, EDICT reconstruction significantly outperforms DDIM, improving the mean square error of reconstruction by a factor of two. Using noise vectors inverted from real images, EDICT enables a wide range of image edits--from local and global semantic edits to image stylization--while maintaining fidelity to the original image structure. EDICT requires no model training/finetuning, prompt tuning, or extra data and can be combined with any pretrained DDM. Code is available at https://github.com/salesforce/EDICT.
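The key idea, alternating affine updates of two coupled sequences so that every assignment can be algebraically undone, can be sketched as follows. The noise predictor `eps`, the schedule coefficients `a_t` and `b_t`, and the mixing weight `p` are placeholders standing in for the actual diffusion sampler's quantities.

```python
# Sketch of alternating coupled updates in the spirit of EDICT; eps, a_t, b_t,
# and p are placeholders for the diffusion model and its sampling schedule.
def coupled_denoise_step(x, y, eps, t, a_t, b_t, p=0.93):
    x_i = a_t * x + b_t * eps(y, t)     # update x using noise predicted from y
    y_i = a_t * y + b_t * eps(x_i, t)   # update y using noise predicted from the new x
    x_next = p * x_i + (1 - p) * y_i    # mixing keeps the two sequences close
    y_next = p * y_i + (1 - p) * x_next
    return x_next, y_next

def coupled_invert_step(x_next, y_next, eps, t, a_t, b_t, p=0.93):
    # exact algebraic reversal of the four assignments above, in opposite order
    y_i = (y_next - (1 - p) * x_next) / p
    x_i = (x_next - (1 - p) * y_i) / p
    y = (y_i - b_t * eps(x_i, t)) / a_t
    x = (x_i - b_t * eps(y, t)) / a_t
    return x, y
```

Because each assignment modifies only one of the two tracked sequences using quantities that remain available, the inverse step recovers the previous state exactly rather than relying on a linearization assumption.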
Fluid-flow devices with low dissipation but high contact area are important in many applications. A well-known strategy for designing such devices is multiscale topology optimization (MTO), in which an optimal microstructure is designed within every cell of a discretized domain. Unfortunately, MTO is computationally very expensive because the evolving microstructures must be homogenized at every step of the optimization. As an alternative, we propose graded multiscale topology optimization (GMTO) for designing fluid-flow devices. In the proposed method, several pre-selected but size-parameterized and orientable microstructures are used to fill the domain optimally. GMTO significantly reduces the computation while retaining many of the benefits of MTO. In particular, GMTO is implemented here using a neural network (NN) because: (1) homogenization can be performed offline and the results used by the NN during optimization; (2) it enables continuous switching between microstructures during optimization; (3) the number of design variables and the computational effort are independent of the number of microstructures used; and (4) it supports automatic differentiation, thereby eliminating manual sensitivity analysis. Several numerical results are presented to illustrate the proposed framework.
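A conceptual PyTorch sketch of such a neural-network parameterization is given below, under the assumption that a coordinate network outputs a soft choice over a few pre-homogenized microstructure families plus a size parameter; the polynomial `homogenized_property` fits are invented placeholders, not the paper's offline data.

```python
# Conceptual sketch of a coordinate network for graded multiscale topology
# optimization; the offline homogenization fits are invented placeholders.
import torch
import torch.nn as nn

NUM_MICROSTRUCTURES = 4

class GMTONet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(2, hidden), nn.SiLU(),
                                  nn.Linear(hidden, hidden), nn.SiLU())
        self.type_head = nn.Linear(hidden, NUM_MICROSTRUCTURES)  # which microstructure family
        self.size_head = nn.Linear(hidden, 1)                    # its size parameter

    def forward(self, xy):
        h = self.body(xy)
        weights = torch.softmax(self.type_head(h), dim=-1)       # continuous switching
        size = torch.sigmoid(self.size_head(h))                  # size in (0, 1)
        return weights, size

def homogenized_property(size, coeffs):
    """Placeholder polynomial fit to offline homogenization data, one per family."""
    return coeffs[0] + coeffs[1] * size + coeffs[2] * size ** 2

coeffs = torch.rand(NUM_MICROSTRUCTURES, 3)    # stand-in offline fits
net = GMTONet()
xy = torch.rand(100, 2)                        # element-center coordinates
weights, size = net(xy)
props = torch.stack([homogenized_property(size.squeeze(-1), c) for c in coeffs], dim=-1)
effective = (weights * props).sum(dim=-1)      # differentiable effective property per element
```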
Extracting temporal relations from text is a crucial but challenging problem in natural language understanding. Depending on the distance between events, a model must learn to draw differently on the local and global context surrounding an event pair for temporal relation prediction. Learning how to fuse this information has proven challenging for transformer-based language models. We therefore introduce MulCo: Multi-Scale Contrastive Co-Training, a technique for better fusing local and global contextualized features. Our model uses a BERT-based language model to encode local context and a graph neural network (GNN) to represent global, document-level syntactic and temporal features. Unlike previous state-of-the-art methods, which either use simple concatenation of multi-view features or select the best sentences with complex reinforcement learning, our model co-trains the GNN and BERT modules using a multi-scale contrastive learning objective. The GNN and BERT modules learn a synergistic parameterization by contrasting GNN multi-layer, multi-hop subgraph embeddings (i.e., global context embeddings) with BERT outputs (i.e., local context embeddings). We empirically demonstrate that, compared with the current state of the art, MulCo better exploits the local and global context encoded by BERT and the GNN. Our experimental results show that MulCo achieves new state-of-the-art results on several temporal relation extraction datasets.
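A generic InfoNCE-style formulation of contrasting the two views is sketched below; it illustrates the co-training idea but is not MulCo's exact multi-scale objective.

```python
# Generic cross-view contrastive loss between local (BERT) and global (GNN)
# embeddings of the same event pairs; not MulCo's exact multi-scale loss.
import torch
import torch.nn.functional as F

def cross_view_contrastive_loss(local_emb: torch.Tensor,
                                global_emb: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """local_emb, global_emb: (batch, dim) embeddings of the same examples from two views."""
    local_emb = F.normalize(local_emb, dim=-1)
    global_emb = F.normalize(global_emb, dim=-1)
    logits = local_emb @ global_emb.t() / temperature   # pairwise cross-view similarities
    targets = torch.arange(local_emb.size(0))           # i-th local matches i-th global
    # symmetric loss: local -> global and global -> local
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```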
Improvements in computing-system performance driven by Moore's Law have transformed society. As these hardware-driven gains slow down, it becomes increasingly important for software developers to focus on performance and efficiency during development. While several studies have demonstrated the potential of such improved code efficiency (e.g., 2x better generational improvements compared to hardware), unlocking these gains in practice has been challenging. Reasoning about algorithmic complexity and the interaction of coding patterns with hardware can be difficult for the average programmer, especially when combined with pragmatic constraints around development velocity and multi-person development. This paper aims to address this problem. We analyze a large competitive-programming dataset from the Google Code Jam competition and find that efficient code is indeed rare, with a 2x difference in runtime between the median and the 90th-percentile solutions. We propose using machine learning to automatically provide prescriptive feedback, in the form of hints, to guide programmers toward writing high-performance code. To learn these hints automatically from the dataset, we propose a novel discrete variational autoencoder in which each discrete latent variable represents a different category of code edits that increase performance. We show that this method represents the multi-modal space of code-efficiency edits better than a sequence-to-sequence baseline and generates a distribution of more efficient solutions.
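A toy PyTorch sketch of a discrete-latent autoencoder of the kind described, using a Gumbel-softmax categorical latent over hypothetical "edit type" categories, is shown below; dimensions and module shapes are illustrative only.

```python
# Toy discrete-latent autoencoder over (slow, fast) program pairs; all sizes
# and encodings are illustrative, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EDIT_TYPES, CODE_DIM, HIDDEN = 8, 256, 128

class DiscreteEditVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2 * CODE_DIM, HIDDEN), nn.ReLU(),
                                     nn.Linear(HIDDEN, NUM_EDIT_TYPES))
        self.decoder = nn.Sequential(nn.Linear(CODE_DIM + NUM_EDIT_TYPES, HIDDEN), nn.ReLU(),
                                     nn.Linear(HIDDEN, CODE_DIM))

    def forward(self, slow_code, fast_code, tau=1.0):
        # infer the latent edit category from a (slow, fast) program pair
        logits = self.encoder(torch.cat([slow_code, fast_code], dim=-1))
        edit_type = F.gumbel_softmax(logits, tau=tau, hard=True)  # one-hot, differentiable
        # reconstruct the fast program from the slow one plus the edit category
        recon = self.decoder(torch.cat([slow_code, edit_type], dim=-1))
        return recon, logits

model = DiscreteEditVAE()
slow, fast = torch.randn(4, CODE_DIM), torch.randn(4, CODE_DIM)
recon, logits = model(slow, fast)
loss = F.mse_loss(recon, fast)   # reconstruction term; a prior/KL term would be added in practice
```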
Representational similarity analysis (RSA) is a method from cognitive neuroscience that helps compare representations from two different sources of data. In this paper, we propose using representational similarity analysis to probe the semantic grounding in language models of code. We probe the semantic grounding of the CodeBERT model using data from the IBM CodeNet dataset. Through our experiments, we show that current pre-training methods do not induce semantic grounding in language models of code; instead, they focus on optimizing form-based patterns. We also show that even a small amount of fine-tuning on semantically relevant tasks substantially increases the semantic grounding of CodeBERT. Ablations over the input modality of the CodeBERT model show that using bimodal inputs (code and natural language) rather than unimodal inputs (code only) provides better semantic grounding and sample efficiency during semantic fine-tuning. Finally, our experiments with semantic perturbations in code show that CodeBERT is able to robustly distinguish semantically correct from incorrect code.
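Representational similarity analysis itself is straightforward to sketch: build a pairwise-similarity matrix for the same items under two representations and correlate the matrices' upper triangles. Spearman correlation and cosine similarity are common choices here; the paper's exact similarity measures may differ.

```python
# Generic RSA computation for probing; similarity and correlation choices are
# common defaults, not necessarily the paper's.
import numpy as np
from scipy.stats import spearmanr

def similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T                  # cosine similarity between all pairs of items

def rsa_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    sim_a, sim_b = similarity_matrix(emb_a), similarity_matrix(emb_b)
    iu = np.triu_indices_from(sim_a, k=1)     # compare only distinct pairs
    return spearmanr(sim_a[iu], sim_b[iu]).correlation
```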